
    Global convergence of splitting methods for nonconvex composite optimization

    We consider the problem of minimizing the sum of a smooth function $h$ with a bounded Hessian and a nonsmooth function. We assume that the latter function is a composition of a proper closed function $P$ and a surjective linear map $\mathcal{M}$, with the proximal mappings of $\tau P$, $\tau > 0$, simple to compute. This problem is nonconvex in general and encompasses many important applications in engineering and machine learning. In this paper, we examine two types of splitting methods for solving this nonconvex optimization problem: the alternating direction method of multipliers and the proximal gradient algorithm. For the direct adaptation of the alternating direction method of multipliers, we show that, if the penalty parameter is chosen sufficiently large and the sequence generated has a cluster point, then the cluster point gives a stationary point of the nonconvex problem. We also establish convergence of the whole sequence under the additional assumption that the functions $h$ and $P$ are semi-algebraic. Furthermore, we give simple sufficient conditions that guarantee boundedness of the sequence generated. These conditions can be satisfied for a wide range of applications, including the least squares problem with $\ell_{1/2}$ regularization. Finally, when $\mathcal{M}$ is the identity, so that the proximal gradient algorithm can be efficiently applied, we show that any cluster point is stationary under a slightly more flexible constant step-size rule than what is known in the literature for a nonconvex $h$. Comment: To appear in SIOP.
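
    A minimal illustrative sketch of the second method mentioned above, in the case where $\mathcal{M}$ is the identity. The $\ell_0$ penalty is used here as a stand-in for $\ell_{1/2}$ because its proximal map is plain hard thresholding, and the constant step size $1/L$ is a conservative choice rather than the paper's relaxed rule.

```python
# Minimal illustrative sketch (not the paper's code): proximal gradient for
#   min_x  0.5*||Ax - b||^2 + lam*||x||_0,
# i.e. the case where the linear map is the identity.  The l0 penalty stands in
# for l_{1/2}; its proximal map is plain hard thresholding.
import numpy as np

def prox_grad_l0(A, b, lam, num_iters=500):
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient of the smooth part
    t = 1.0 / L                                # conservative constant step size
    x = np.zeros(A.shape[1])
    thresh = np.sqrt(2.0 * t * lam)            # hard-thresholding level for prox of t*lam*||.||_0
    for _ in range(num_iters):
        y = x - t * (A.T @ (A @ x - b))        # forward (gradient) step
        x = np.where(np.abs(y) > thresh, y, 0.0)  # backward (proximal) step
    return x

# Tiny synthetic example
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100); x_true[:5] = 1.0
b = A @ x_true
print(np.nonzero(prox_grad_l0(A, b, lam=0.1))[0])
```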

    Calculus of the exponent of Kurdyka-Łojasiewicz inequality and its applications to linear convergence of first-order methods

    In this paper, we study the Kurdyka-Łojasiewicz (KL) exponent, an important quantity for analyzing the convergence rate of first-order methods. Specifically, we develop various calculus rules to deduce the KL exponent of new (possibly nonconvex and nonsmooth) functions formed from functions with known KL exponents. In addition, we show that the well-studied Luo-Tseng error bound, together with a mild assumption on the separation of stationary values, implies that the KL exponent is $\frac12$. The Luo-Tseng error bound is known to hold for a large class of concrete structured optimization problems, and thus we deduce the KL exponent of a large class of functions whose exponents were previously unknown. Building upon this and the calculus rules, we are then able to show that for many convex or nonconvex optimization models arising in applications such as sparse recovery, the objective function's KL exponent is $\frac12$. This includes the least squares problem with smoothly clipped absolute deviation (SCAD) regularization or minimax concave penalty (MCP) regularization, and the logistic regression problem with $\ell_1$ regularization. Since many existing local convergence rate analyses for first-order methods in the nonconvex scenario rely on the KL exponent, our results enable us to obtain explicit convergence rates for various first-order methods when they are applied to a large variety of practical optimization models. Finally, we further illustrate how our results can be applied to establish local linear convergence of the proximal gradient algorithm and the inertial proximal algorithm with constant step-sizes for some specific models that arise in sparse recovery. Comment: The paper is accepted for publication in Foundations of Computational Mathematics: https://link.springer.com/article/10.1007/s10208-017-9366-8. In this update, we fill in more details in the proof of Theorem 4.1 concerning the nonemptiness of the projection onto the set of stationary points.
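
    For reference, one standard way to write the KL inequality with exponent $\alpha$; this is a common textbook-style phrasing, not quoted from the paper.

```latex
% KL property with exponent \alpha at a stationary point \bar{x} of a proper
% closed function f: there exist c > 0, \epsilon > 0 and \nu > 0 such that
\[
  \operatorname{dist}\bigl(0,\partial f(x)\bigr)\;\ge\; c\,\bigl(f(x)-f(\bar{x})\bigr)^{\alpha}
\]
% whenever \|x-\bar{x}\| \le \epsilon and f(\bar{x}) < f(x) < f(\bar{x}) + \nu.
% When \alpha = 1/2, standard arguments give local linear convergence of the
% proximal gradient method, which is why pinning down the exponent 1/2 matters.
```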

    SOS-Hankel Tensors: Theory and Application

    Hankel tensors arise from signal processing and some other applications. SOS (sum-of-squares) tensors are positive semi-definite symmetric tensors, but the converse does not hold in general. The problem of determining whether an even order symmetric tensor is an SOS tensor is equivalent to solving a semi-infinite linear programming problem, which can be done in polynomial time. On the other hand, the problem of determining whether an even order symmetric tensor is positive semi-definite is NP-hard. In this paper, we study SOS-Hankel tensors. Currently, there are two known classes of positive semi-definite Hankel tensors: even order complete Hankel tensors and even order strong Hankel tensors. We show that complete Hankel tensors are strong Hankel tensors, and that even order strong Hankel tensors are SOS-Hankel tensors. We give several examples of positive semi-definite Hankel tensors that are not strong Hankel tensors; however, all of them are still SOS-Hankel tensors. Does there exist a positive semi-definite non-SOS Hankel tensor? The answer to this question remains open. If the answer is no, then the problem of determining whether an even order Hankel tensor is positive semi-definite is solvable in polynomial time. An application of SOS-Hankel tensors to the positive semi-definite tensor completion problem is discussed. We present an ADMM algorithm for solving this problem. Some preliminary numerical results on this algorithm are reported.
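
    An illustrative sketch, with arbitrarily chosen parameters, of the two objects in play: a Hankel tensor assembled from its generating vector, and the sufficient "strong Hankel" test that checks positive semi-definiteness of the Hankel matrix generated by the same vector (here $m(n-1)$ is even, so that matrix is square).

```python
# Illustrative sketch (not the authors' code): build an order-m, dimension-n
# Hankel tensor from a generating vector v of length m*(n-1)+1, and check the
# "strong Hankel" sufficient condition, i.e. positive semi-definiteness of the
# square Hankel matrix generated by v (assumes m*(n-1) is even).
import itertools
import numpy as np

def hankel_tensor(v, m, n):
    """Entries H[i1,...,im] = v[i1 + ... + im] (0-based indices)."""
    assert len(v) == m * (n - 1) + 1
    H = np.zeros((n,) * m)
    for idx in itertools.product(range(n), repeat=m):
        H[idx] = v[sum(idx)]
    return H

def is_strong_hankel(v, m, n, tol=1e-10):
    """PSD test for the Hankel matrix generated by v."""
    k = m * (n - 1) // 2 + 1                   # size of the square Hankel matrix
    A = np.array([[v[i + j] for j in range(k)] for i in range(k)])
    return np.min(np.linalg.eigvalsh(A)) >= -tol

m, n = 4, 3                                    # even order; m*(n-1) = 8 is even
v = np.array([1.0, 0, 1, 0, 1, 0, 1, 0, 1])    # an arbitrary generating vector
H = hankel_tensor(v, m, n)
print(H.shape, is_strong_hankel(v, m, n))      # (3, 3, 3, 3) True
```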

    A Tensor Analogy of Yuan's Theorem of the Alternative and Polynomial Optimization with Sign Structure

    Yuan's theorem of the alternative is an important theoretical tool in optimization, which provides a checkable certificate for the infeasibility of a strict inequality system involving two homogeneous quadratic functions. In this paper, we provide a tractable extension of Yuan's theorem of the alternative to the symmetric tensor setting. As an application, we establish that the optimal value of a class of nonconvex polynomial optimization problems with suitable sign structure (more explicitly, with essentially non-positive coefficients) can be computed by a related convex conic programming problem, and that the optimal solutions of these nonconvex polynomial optimization problems can be recovered from the corresponding solutions of the convex conic programming problem. Moreover, we show that this class of nonconvex polynomial optimization problems enjoys exact sum-of-squares relaxation, and so can be solved via a single semidefinite programming problem. Comment: Accepted by the Journal of Optimization Theory and Applications; UNSW preprint, 22 pages.
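
    For context, a commonly cited matrix (quadratic-form) version of Yuan's theorem of the alternative, recalled here from standard references rather than quoted from the paper; the paper extends a statement of this type from quadratic forms to symmetric tensors.

```latex
% Yuan's theorem of the alternative for two symmetric matrices A_1, A_2:
% exactly one of the following two statements holds.
\[
  \exists\, x:\ x^{\top}A_1x<0 \ \text{and}\ x^{\top}A_2x<0
  \qquad\text{or}\qquad
  \exists\, \lambda_1,\lambda_2\ge 0,\ \lambda_1+\lambda_2=1:\ \lambda_1A_1+\lambda_2A_2\succeq 0.
\]
```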

    Peaceman-Rachford splitting for a class of nonconvex optimization problems

    We study the applicability of the Peaceman-Rachford (PR) splitting method for solving nonconvex optimization problems. When the method is applied to minimizing the sum of a strongly convex Lipschitz differentiable function and a proper closed function, we show that if the strongly convex function has a sufficiently large strong convexity modulus and the step-size parameter is chosen below a computable threshold, then any cluster point of the sequence generated, if one exists, gives a stationary point of the optimization problem. We also give sufficient conditions guaranteeing boundedness of the sequence generated. We then discuss one way to split the objective so that the proposed method can be suitably applied to optimization problems with a coercive objective that is the sum of a (not necessarily strongly) convex Lipschitz differentiable function and a proper closed function; this setting covers a large class of nonconvex feasibility problems and constrained least squares problems. Finally, we illustrate the proposed algorithm numerically.
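
    A minimal sketch, under assumptions of our own choosing, of the Peaceman-Rachford iteration in the setting just described: $f$ is a strongly convex regularized least squares term with an inexpensive proximal map and $g$ is the (nonconvex) indicator of a sparsity constraint, as in constrained least squares. The modulus $\sigma$ and step size $\gamma$ below are arbitrary; the paper's conditions (a large enough strong convexity modulus and $\gamma$ below a computable threshold) would still need to be checked.

```python
# Minimal sketch (illustrative assumptions, not the authors' implementation):
# Peaceman-Rachford splitting for
#   min_x  0.5*||Ax-b||^2 + (sigma/2)*||x||^2  +  indicator{ ||x||_0 <= s }(x),
# where the first two terms form a strongly convex f with a closed-form prox and
# the last is a nonconvex proper closed g whose prox is a projection.
import numpy as np

def peaceman_rachford(A, b, s, sigma=1.0, gamma=0.5, num_iters=300):
    n = A.shape[1]
    M = A.T @ A + (sigma + 1.0 / gamma) * np.eye(n)   # system matrix for prox of gamma*f
    Atb = A.T @ b

    def prox_f(z):                                    # prox of gamma*f: a linear solve
        return np.linalg.solve(M, Atb + z / gamma)

    def prox_g(w):                                    # projection onto the s-sparse set
        y = np.zeros_like(w)
        keep = np.argsort(np.abs(w))[-s:]
        y[keep] = w[keep]
        return y

    z = np.zeros(n)
    for _ in range(num_iters):
        x = prox_f(z)
        y = prox_g(2 * x - z)
        z = z + 2 * (y - x)                           # full (Peaceman-Rachford) reflection
    return y

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 80))
x_true = np.zeros(80); x_true[:4] = 2.0
b = A @ x_true
print(np.nonzero(peaceman_rachford(A, b, s=4))[0])
```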